Optimizing Novelty of Top-k Recommendations using Large Language Models and Reinforcement Learning

Sharma, Amit, Li, Hua, Li, Xue, Jiao, Jian

arXiv.org Artificial Intelligence

Given an input query, a recommendation model is trained using user feedback data (e.g., click data) to output a ranked list of items. In real-world systems, besides accuracy, an important consideration for a new model is the novelty of its top-k recommendations w.r.t. an existing deployed model. However, novelty of top-k items is a difficult goal to optimize a model for, since it involves a non-differentiable sorting operation on the model's predictions. Moreover, novel items, by definition, do not have any user feedback data. Given the semantic capabilities of large language models, we address these problems using a reinforcement learning (RL) formulation where large language models provide feedback for the novel items. However, given millions of candidate items, the sample complexity of a standard RL algorithm can be prohibitively high. To reduce sample complexity, we reduce the top-k list reward to a set of item-wise rewards and reformulate the state space to consist of (query, item) tuples such that the action space is reduced to a binary decision; and show that this reformulation results in a significantly lower complexity when the number of items is large. We evaluate the proposed algorithm on improving novelty for a query-ad recommendation task on a large-scale search engine. Compared to supervised finetuning on recent pairs, the proposed RL-based algorithm leads to significant novelty gains with minimal loss in recall. We obtain similar results on the ORCAS query-webpage matching dataset and a product recommendation dataset based on Amazon reviews.
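The reduction the abstract describes can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the reward values, the `deployed_topk` structure, and the `llm_relevance` judge are all hypothetical stand-ins for the paper's components.

```python
# Sketch (assumed, not the paper's code): decompose a top-k list reward into
# item-wise rewards over (query, item) states with a binary keep/drop action.

def item_reward(query, item, deployed_topk, llm_relevance):
    """Item-wise reward: the item is rewarded if it is novel w.r.t. the
    deployed model's top-k list AND judged relevant. Since novel items lack
    click data, an LLM judge supplies the relevance signal."""
    novel = item not in deployed_topk.get(query, ())
    relevant = llm_relevance(query, item)
    return 1.0 if (novel and relevant) else 0.0

def topk_reward(query, ranked_items, k, deployed_topk, llm_relevance):
    """List-level reward expressed as a sum of item-wise rewards, so each
    (query, item) pair can be treated as its own state in the RL problem."""
    return sum(item_reward(query, it, deployed_topk, llm_relevance)
               for it in ranked_items[:k])
```

Because the list reward is a plain sum over items, the agent never has to explore the combinatorial space of ranked lists; it only learns a binary decision per (query, item) pair, which is what drives down the sample complexity.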


Models for Capturing Temporal Smoothness in Evolving Networks for Learning Latent Representation of Nodes

Saha, Tanay Kumar, Williams, Thomas, Hasan, Mohammad Al, Joty, Shafiq, Varberg, Nicholas K.

arXiv.org Machine Learning

In a dynamic network, the neighborhoods of the vertices evolve across different temporal snapshots of the network. Accurate modeling of this temporal evolution can help solve complex tasks involving real-life social and interaction networks. However, existing models for learning latent representations are inadequate for obtaining the representation vectors of the vertices for different time-stamps of a dynamic network in a meaningful way. In this paper, we propose latent representation learning models for dynamic networks which overcome the above limitation by considering two different kinds of temporal smoothness: (i) retrofitted, and (ii) linear transformation. The retrofitted model tracks the representation vector of a vertex over time, facilitating vertex-based temporal analysis of a network. On the other hand, the linear-transformation-based model provides a smooth transition operator which maps the representation vectors of all vertices from one temporal snapshot to the next (unobserved) snapshot; this facilitates prediction of the state of a network at a future time-stamp. We validate the performance of our proposed models by employing them for solving the temporal link prediction task. Experiments on 9 real-life networks from various domains validate that the proposed models are significantly better than the existing models for predicting the dynamics of an evolving network.
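The two smoothness schemes can be illustrated numerically. This is a minimal sketch under assumed simplifications: a fixed interpolation weight for retrofitting and an ordinary least-squares fit for the transition operator, neither of which is claimed to match the paper's exact training objectives.

```python
# Hedged sketch of the two temporal-smoothness schemes; alpha and the
# least-squares fit are illustrative choices, not the paper's training setup.
import numpy as np

def retrofit(prev_vecs, curr_vecs, alpha=0.5):
    """Retrofitted model: pull each vertex's current-snapshot vector toward
    its vector from the previous snapshot, smoothing vertex trajectories
    over time (rows index vertices, columns index latent dimensions)."""
    return alpha * curr_vecs + (1.0 - alpha) * prev_vecs

def fit_transition(prev_vecs, curr_vecs):
    """Linear-transformation model: learn an operator W mapping all vertex
    vectors of snapshot t to snapshot t+1 via least squares."""
    W, *_ = np.linalg.lstsq(prev_vecs, curr_vecs, rcond=None)
    return W

def predict_next(curr_vecs, W):
    """Extrapolate to an unobserved future snapshot by applying W."""
    return curr_vecs @ W
```

The retrofitted variant keeps per-vertex histories comparable across snapshots (useful for vertex-level temporal analysis), while the transition operator supports forecasting, e.g. producing candidate embeddings for temporal link prediction at the next time-stamp.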


Letters to the Editor

Mostow, Jack, Katke, William, Partridge, Derek, Koton, Phyllis, Estrin, Deborah, Gray, Sharon, Ladin, Rivka, Eisenberg, Mike, Duffy, Gavin, Dorr, Bonnie, Batali, John, Levitt, David, Shirley, Mark, Giansiracusa, Robert, Montalvo, Fanya, Pitman, Kent, Golden, Ellen, Stone, Bob

AI Magazine

… to be accommodated within the SPIV paradigm. But until such time as we find these learning algorithms (and I don't think that many would argue that such algorithms will be available in the foreseeable future) we must face the prospect of systems that will need to be modified, in nontrivial ways, throughout their useful lives. Thus incremental development will be a constant feature of such software, and if it is not fully automatic then it will be part of the human maintenance of the system. I am, of course, not suggesting that the products of, say, architectural design (i.e., buildings) will need a learning capability. Nevertheless, a final fixed design that remains "optimal" in a dynamically changing world is a rare event. The similarity between AI system development and the design of more concrete objects is still present, but it is, in some respects, rather tenuous, I admit.

And even if verification were possible it would not contribute very much to the development of production software. Hence "verifiability must not be allowed to overshadow reliability. Scientists should not confuse mathematical models with reality." AI is perhaps not so special; it is rather an extreme, and thus certain of its characteristics are more obvious than in conventional software applications. Thus the SPIV methodology may be inappropriate for an even larger class of problems than those of AI.

I have raised all these points not to try to deny the worth of Mostow's ideas and issues concerning the design process, but to make the case that such endeavors should also be pursued within a fundamentally incremental and evolutionary framework for design. The potential of the RUDE paradigm is deserving of more attention than it is …